How to obtain $f'(x)$ by evaluating $f(x)$ at various points. We start from the definition $$ f'(x) = \lim_{h\to 0}\frac{f(x+h)-f(x)}{h} $$
Discretization and the question of accuracy (truncation error vs. round-off error), forward differences, central differences
Asymptotic behaviour of the error and its elimination - Richardson extrapolation
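The Taylor expansions behind the two discretizations read $$ \frac{f(x+h)-f(x)}{h} = f'(x) + \frac{h}{2}f''(x) + \frac{h^2}{6}f'''(x) + \dots, \qquad \frac{f(x+h)-f(x-h)}{2h} = f'(x) + \frac{h^2}{6}f'''(x) + \frac{h^4}{120}f^{(5)}(x) + \dots $$ so the forward difference is first-order and the central difference second-order accurate in $h$.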
In [1]:
%matplotlib inline
%config InlineBackend.close_figures=False
import pylab as P
import numpy as N
In [2]:
#define function for calculating the forward differences:
def diff_FD(f, x, dx): return (f(x+dx) - f(x))/dx
In [3]:
#the asymptotic error of the above function, obtained by Taylor expansion:
def diff_FD_err(f, d2f, d3f, x, dx): return 0.5*d2f(x)*dx
In [4]:
# use function sin(x) to test our implementation
def f(x): return N.sin(x)
def dfdx(x): return N.cos(x)
def dfdx2(x): return -N.sin(x)
def dfdx3(x): return -N.cos(x)
In [5]:
# calculate the derivative at x0 = 1.0 for the following values of dx
x0 = 1.0
dx = N.logspace(-17, 1, 500)
In [6]:
P.figure(figsize=(8,5))
rel_err = N.abs((diff_FD(f, x0, dx) - dfdx(x0))/dfdx(x0))
P.loglog(dx, rel_err)
# some plotting setup to make it look nice
from matplotlib.ticker import LogLocator
P.ylim([1e-13, 1e1])
P.gca().xaxis.set_major_locator(LogLocator(base=100., numticks=20))
P.xlabel("step size")
P.ylabel("rel. error")
P.grid(which="both")
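The location of the minimum in the error curve can also be estimated analytically: round-off contributes roughly $\varepsilon\,|f(x)|/\mathrm{d}x$ and truncation $\tfrac12 |f''(x)|\,\mathrm{d}x$, so the optimum lies near $\mathrm{d}x \approx \sqrt{\varepsilon}$. A quick standalone check (`dx_opt` is our own name, not from the notebook above):

```python
import numpy as np

def diff_FD(f, x, dx):
    return (f(x + dx) - f(x)) / dx

x0 = 1.0
eps = np.finfo(float).eps                  # ~2.2e-16 for double precision
# balance rounding ~ eps*|f|/dx against truncation ~ |f''|*dx/2
dx_opt = np.sqrt(2 * eps * abs(np.sin(x0)) / abs(-np.sin(x0)))
rel_err = abs((diff_FD(np.sin, x0, dx_opt) - np.cos(x0)) / np.cos(x0))
print(dx_opt)    # ~2e-8
print(rel_err)   # ~1e-8, close to the minimum of the curve above
```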
In [13]:
#compare with the asymptotic error value
err = N.abs(diff_FD_err(f, dfdx2, dfdx3, x0, dx)/dfdx(x0))
P.loglog(dx, err,":k")
Out[13]:
In [7]:
#define a function for calculating the centered differences:
def diff_CD(f, x, dx): return (f(x+dx) - f(x-dx))/(2*dx)
#the asymptotic error of the above function, obtained by Taylor expansion:
def diff_CD_err(f, d2f, d3f, x, dx): return 1/6.*d3f(x)*dx**2
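Even a single evaluation already shows the difference in order between the two schemes (a plain-number sketch, independent of the cells above):

```python
import numpy as np

x0, dx = 1.0, 1e-4
fd = (np.sin(x0 + dx) - np.sin(x0)) / dx            # forward difference
cd = (np.sin(x0 + dx) - np.sin(x0 - dx)) / (2*dx)   # centered difference
fd_err = abs(fd - np.cos(x0))
cd_err = abs(cd - np.cos(x0))
print(fd_err)   # ~4e-5: first order, error ~ f''(x0)/2 * dx
print(cd_err)   # ~1e-9: second order, error ~ f'''(x0)/6 * dx**2
```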
In [8]:
rel_err = N.abs((diff_CD(f, x0, dx) - dfdx(x0))/dfdx(x0))
P.loglog(dx, rel_err)
Out[8]:
In [9]:
#compare with the asymptotic error value
err = N.abs(diff_CD_err(f, dfdx2, dfdx3, x0, dx)/dfdx(x0))
P.loglog(dx, err,":k")
Out[9]:
In [18]:
# apply Richardson extrapolation once to each method, and finally eliminate the first
# 4 terms of the asymptotic error expansion
def richardson(f, x0, dx, p):
    if len(p) > 1:
        return (2**p[0]*richardson(f, x0, dx, p[1:]) -
                richardson(f, x0, 2*dx, p[1:]))/(2**p[0]-1)
    return (2**p[0]*operator(f, x0, dx) - operator(f, x0, 2*dx))/(2**p[0]-1)
for operator, p1 in [(diff_FD, [1]), (diff_CD, [2]), (diff_CD, [2, 4, 6, 8])]:
    rel_err = N.abs((richardson(f, x0, dx, p1) - dfdx(x0))/dfdx(x0))
    P.loglog(dx, rel_err, "--")
Indeed, we easily obtain a method of 10$^{\rm th}$-order accuracy, although perhaps not the most efficient one.
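The claimed orders can also be verified numerically by measuring the slope of the error between two step sizes. A self-contained sketch (here the difference operator is passed explicitly instead of relying on a global variable):

```python
import numpy as np

def diff_CD(f, x, dx):
    return (f(x + dx) - f(x - dx)) / (2 * dx)

def richardson(op, f, x0, dx, p):
    # eliminate the dx**p[0] error term by combining steps dx and 2*dx
    if len(p) > 1:
        return (2**p[0] * richardson(op, f, x0, dx, p[1:])
                - richardson(op, f, x0, 2*dx, p[1:])) / (2**p[0] - 1)
    return (2**p[0] * op(f, x0, dx) - op(f, x0, 2*dx)) / (2**p[0] - 1)

x0 = 1.0
# one extrapolation of the centered difference should be 4th order:
e1 = abs(richardson(diff_CD, np.sin, x0, 0.10, [2]) - np.cos(x0))
e2 = abs(richardson(diff_CD, np.sin, x0, 0.05, [2]) - np.cos(x0))
order = np.log2(e1 / e2)
print(order)   # ~4.0

# eliminating four error terms makes the error tiny already at dx = 0.1:
e10 = abs(richardson(diff_CD, np.sin, x0, 0.1, [2, 4, 6, 8]) - np.cos(x0))
print(e10)     # ~1e-12
```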